Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples (AEs), which are maliciously crafted to fool target models. Normal examples (NEs) to which imperceptible adversarial perturbations have been added can pose a security threat to DNNs. Although existing AE detection methods achieve high accuracy, they fail to exploit the information carried by the detected AEs. Therefore, we propose a model-free AE detection method based on high-dimensional perturbation extraction, whose whole process does not query the victim model. Research has shown that DNNs are sensitive to high-dimensional features: the adversarial perturbations hidden in adversarial examples belong to high-dimensional features, which are highly predictive but non-robust, and DNNs learn more details from high-dimensional data than from other data. In our method, a perturbation extractor extracts the adversarial perturbation from an AE as a high-dimensional feature, and a trained AE discriminator then determines whether the input is an AE. Experimental results show that the proposed method can not only detect adversarial examples with high accuracy but also identify the specific category of an AE. Meanwhile, the extracted perturbation can be used to recover the AE back to an NE.
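To make the pipeline concrete, below is a minimal, hypothetical sketch of the two-stage design the abstract describes: an extractor network estimates the perturbation hidden in an input, and a discriminator classifies the input from that extracted perturbation, without ever querying the victim model. The architectures, shapes, and class names are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch of the two-stage detection pipeline described above:
# an extractor isolates the (assumed) perturbation component of an input,
# and a discriminator classifies the input as normal or adversarial.
import torch
import torch.nn as nn

class PerturbationExtractor(nn.Module):
    """Assumed encoder-decoder that predicts the perturbation added to an image."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # estimated perturbation, same shape as x

class AEDiscriminator(nn.Module):
    """Assumed classifier operating on the extracted perturbation."""
    def __init__(self, channels=3, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # num_classes could be >2 to also predict the attack category.
        self.head = nn.Linear(32, num_classes)

    def forward(self, perturbation):
        return self.head(self.features(perturbation))

def detect(x, extractor, discriminator):
    """Return predicted label (0 = normal example, 1 = adversarial example)."""
    with torch.no_grad():
        delta = extractor(x)          # x - delta would give the recovered NE
        logits = discriminator(delta)
    return logits.argmax(dim=1)

# Usage on a random batch; note that no victim-model queries are involved.
x = torch.rand(4, 3, 32, 32)
print(detect(x, PerturbationExtractor(), AEDiscriminator()))
```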
Speech is easily leaked, for example through recordings made by mobile phones in various situations, and the private content in speech may be maliciously extracted using speech enhancement technology. Speech enhancement has developed rapidly together with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail. In this work, we propose an adversarial method to degrade speech enhancement systems. Experimental results show that, after speech enhancement, the generated adversarial examples can remove most of the content information of the original examples or replace it with target speech content. The word error rate (WER) between the recognition results of the enhanced original examples and the enhanced adversarial examples can reach 89.0%, and for the targeted attack the WER between the enhanced adversarial examples and the target examples is as low as 33.75%. The adversarial perturbation brings a change rate of more than 1.4430 relative to the original examples. This work can prevent the malicious extraction of speech content.
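The abstract does not give the attack's optimization details, so the following is only a hedged sketch of a generic PGD-style, untargeted attack against a differentiable enhancement model: the perturbation is optimized so that the enhanced adversarial waveform diverges from the enhanced clean waveform. The `enhancer` stand-in, step size, bound, and iteration count are all assumptions.

```python
# Hypothetical untargeted gradient attack on a speech enhancement DNN: push the
# enhanced adversarial output away from the enhanced clean output while keeping
# the waveform perturbation within an L-infinity bound.
import torch
import torch.nn as nn

def attack_enhancement(enhancer: nn.Module, waveform: torch.Tensor,
                       epsilon=0.01, alpha=0.002, steps=50):
    enhancer.eval()
    with torch.no_grad():
        clean_enhanced = enhancer(waveform)           # reference enhanced output
    delta = torch.zeros_like(waveform, requires_grad=True)
    for _ in range(steps):
        adv_enhanced = enhancer(waveform + delta)
        # Untargeted objective: maximize distance to the clean enhanced output.
        # (A targeted variant would instead minimize distance to a target utterance.)
        loss = -nn.functional.mse_loss(adv_enhanced, clean_enhanced)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()        # PGD-style signed step
            delta.clamp_(-epsilon, epsilon)           # keep the perturbation small
        delta.grad.zero_()
    return (waveform + delta).detach()

# Usage with a toy stand-in enhancer (a single 1-D convolution) on random audio.
toy_enhancer = nn.Conv1d(1, 1, kernel_size=9, padding=4)
adv = attack_enhancement(toy_enhancer, torch.randn(1, 1, 16000))
print(adv.shape)
```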
Traffic sign detection is a challenging task for unmanned driving systems, especially with regard to detecting multi-scale targets and meeting real-time requirements. During traffic sign detection, the scale of the targets varies greatly, which affects detection accuracy. Feature pyramids are widely used to address this problem, but they may break the feature consistency across different scales of traffic signs. Moreover, in practical applications, it is difficult for commonly used methods to improve the detection accuracy of multi-scale traffic signs while guaranteeing real-time detection. In this paper, we propose an improved feature pyramid model, named AF-FPN, which utilizes an adaptive attention module (AAM) and a feature enhancement module (FEM) to reduce the information loss during feature map generation and to enhance the representation ability of the feature pyramid. We replace the original feature pyramid network in YOLOv5 with AF-FPN, which improves the detection performance of YOLOv5 on multi-scale targets while ensuring real-time detection. In addition, a new automatic learning data augmentation method is proposed to enrich the dataset and improve the robustness of the model, making it more suitable for practical scenarios. Extensive experimental results on the Tsinghua-Tencent 100K (TT100K) dataset demonstrate the effectiveness and superiority of the proposed method compared with several state-of-the-art methods.
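The abstract does not spell out the internals of AAM and FEM, so the sketch below only illustrates the general idea of adaptively re-weighting a feature-pyramid level before fusion, using a generic squeeze-and-excitation-style channel attention; it should not be read as the actual AF-FPN design.

```python
# Generic channel-attention sketch: a pyramid feature map is re-weighted per
# channel by weights predicted from its own global statistics. Layer sizes are
# illustrative assumptions, not the AF-FPN modules themselves.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # per-channel re-weighting of the pyramid feature

# A pyramid level of shape (batch, channels, H, W):
feat = torch.randn(2, 256, 40, 40)
print(ChannelAttention(256)(feat).shape)
```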
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
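Since the abstract does not disclose RELIANT's exact mechanism, the following is only a generic sketch of how a distillation objective can be paired with a fairness penalty: the student matches the teacher's soft labels and the ground truth, while a demographic-parity-style term discourages prediction gaps between sensitive groups. All names and weights are illustrative assumptions.

```python
# Generic fairness-aware knowledge distillation loss (not RELIANT's formulation):
# soft-label KD term + task cross-entropy + a demographic-parity-style gap penalty.
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, group,
                 temperature=2.0, alpha=0.5, beta=0.1):
    # Standard soft-label distillation term.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Task loss on ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Fairness penalty: gap in mean positive-class probability between the two
    # sensitive groups (group is a 0/1 tensor).
    prob = F.softmax(student_logits, dim=1)[:, 1]
    gap = (prob[group == 0].mean() - prob[group == 1].mean()).abs()
    return alpha * kd + (1 - alpha) * ce + beta * gap

# Usage on random data (binary node classification, binary sensitive attribute).
s = torch.randn(8, 2)
t = torch.randn(8, 2)
y = torch.randint(0, 2, (8,))
g = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(fair_kd_loss(s, t, y, g))
```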
Despite significant progress in object categorization in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address problems of supervised, zero-shot, generalized zero-shot and open set recognition using a unified framework. Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. Distance constraints ensure that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to 310K class vocabulary on the Animals with Attributes and ImageNet datasets.
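A minimal sketch of the kind of distance constraint mentioned above might look as follows: an embedded sample should be closer to its correct vocabulary prototype than to any other atom by a margin. The unweighted hinge form, the margin value, and the use of squared Euclidean distance are simplifying assumptions, not the paper's exact weighted formulation.

```python
# Hinge-style prototype margin loss over vocabulary atoms (simplified sketch).
import torch

def vocabulary_margin_loss(embedded, label, prototypes, margin=1.0):
    """embedded: (B, D) projected samples; prototypes: (V, D) vocabulary atoms."""
    # Squared Euclidean distance from each sample to every prototype: (B, V).
    dists = torch.cdist(embedded, prototypes) ** 2
    correct = dists.gather(1, label.unsqueeze(1))            # distance to true class
    # Hinge penalty whenever a wrong prototype is not farther by the margin.
    mask = torch.ones_like(dists, dtype=torch.bool)
    mask.scatter_(1, label.unsqueeze(1), False)              # ignore the true class
    violations = torch.clamp(margin + correct - dists, min=0) * mask
    return violations.sum(dim=1).mean()

# Usage: 4 samples in a 16-d semantic space with a 10-atom vocabulary.
emb = torch.randn(4, 16)
protos = torch.randn(10, 16)
y = torch.randint(0, 10, (4,))
print(vocabulary_margin_loss(emb, y, protos))
```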
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
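As an illustration of the selection loop (not the paper's amortized training procedure, which is omitted here), a hedged sketch might look like the following: a selector network scores the not-yet-acquired features given the currently observed values and a mask, the top-scoring feature is queried next, and a predictor produces the final output once the budget is exhausted. The network shapes and the budget are assumptions.

```python
# Sketch of a dynamic feature selection loop with an amortized selector network.
import torch
import torch.nn as nn

n_features, n_classes, budget = 20, 5, 6

selector = nn.Sequential(nn.Linear(2 * n_features, 64), nn.ReLU(),
                         nn.Linear(64, n_features))         # score per candidate feature
predictor = nn.Sequential(nn.Linear(2 * n_features, 64), nn.ReLU(),
                          nn.Linear(64, n_classes))         # class logits

def dynamic_feature_selection(x):
    """x: (n_features,) full feature vector, revealed one entry at a time."""
    mask = torch.zeros(n_features)        # which features have been acquired
    observed = torch.zeros(n_features)    # their observed values (0 elsewhere)
    for _ in range(budget):
        state = torch.cat([observed, mask])
        scores = selector(state).masked_fill(mask.bool(), float("-inf"))
        j = scores.argmax().item()        # greedily pick the best unselected feature
        mask[j] = 1.0
        observed[j] = x[j]                # "query" feature j
    logits = predictor(torch.cat([observed, mask]))
    return logits.argmax().item(), mask

x = torch.randn(n_features)
pred, used = dynamic_feature_selection(x)
print(pred, used.nonzero().squeeze(-1).tolist())
```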
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
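To illustrate the input-output mapping described above, here is a rough, assumption-laden sketch: amplitude and phase tensors are stacked, lifted to an image-like grid, and decoded into DensePose-style outputs, i.e., per-pixel body-part logits plus UV coordinates for each of the 24 regions. The input layout and every layer size are illustrative only.

```python
# Hypothetical WiFi-to-dense-pose network: maps amplitude/phase of channel
# samples to body-part segmentation logits and per-part UV coordinate maps.
import torch
import torch.nn as nn

class WiFiDensePoseNet(nn.Module):
    def __init__(self, in_channels=6, parts=24, out_hw=(56, 56)):
        super().__init__()
        self.out_hw = out_hw
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # Heads: part segmentation (24 parts + background) and per-part U, V maps.
        self.part_head = nn.Conv2d(128, parts + 1, 1)
        self.uv_head = nn.Conv2d(128, 2 * parts, 1)

    def forward(self, amplitude, phase):
        x = torch.cat([amplitude, phase], dim=1)             # stack the two inputs
        x = nn.functional.interpolate(x, size=self.out_hw)   # lift to an image-like grid
        feat = self.encoder(x)
        return self.part_head(feat), self.uv_head(feat)

# Toy input: 3 transmit-receive antenna pairs x 30 subcarriers x 100 time samples.
amp = torch.randn(1, 3, 30, 100)
pha = torch.randn(1, 3, 30, 100)
part_logits, uv = WiFiDensePoseNet()(amp, pha)
print(part_logits.shape, uv.shape)   # (1, 25, 56, 56) and (1, 48, 56, 56)
```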
With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions only based on contexts augmented with a few training examples. It has been a new trend exploring ICL to evaluate and extrapolate the ability of LLMs. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and improving ICL in future work.
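As a concrete illustration of the ICL setting (not tied to any particular model), a prompt can be assembled from a task instruction, a few labeled demonstrations, and the query, with no parameter updates; `llm_generate` below is a placeholder, not a real API.

```python
# Minimal in-context learning prompt: instruction + few-shot demonstrations + query.
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this product.", "negative"),
    ("An instant classic, warmly recommended.", "positive"),
]
query = "The plot dragged and the ending made no sense."

prompt = "Classify the sentiment of each review as positive or negative.\n\n"
for text, label in demonstrations:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to an actual LLM completion endpoint."""
    return f"<completion for a {len(prompt)}-character prompt>"

print(prompt)
print(llm_generate(prompt))
```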
Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network with several modules like CNN, LSTM and Attention. Recent methods combine the Transformer with these modules for better performance. However, it requires tedious optimization skills to train a network composed of mixed modules, making these methods inconvenient to use in practice. In this paper, we propose to design \emph{pure Transformer-based networks} for deep RL, aiming at providing off-the-shelf backbones for both the online and offline settings. Specifically, the Transformer in Transformer (TIT) backbone is proposed, which cascades two Transformers in a very natural way: the inner one is used to process a single observation, while the outer one is responsible for processing the observation history; combining both is expected to extract spatial-temporal representations for good decision-making. Experiments show that TIT can achieve satisfactory performance in different settings, consistently.
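A simplified sketch of the cascaded design might look as follows: an inner Transformer encodes the tokens of one observation into a single embedding, and an outer Transformer runs over the sequence of such embeddings (the observation history) to produce a representation for decision-making. Token dimensions, head counts, and mean-pooling are illustrative assumptions rather than the paper's exact configuration.

```python
# Simplified Transformer-in-Transformer backbone for RL observation histories.
import torch
import torch.nn as nn

class TIT(nn.Module):
    def __init__(self, token_dim=32, d_model=64, n_actions=4):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        inner_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        outer_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.inner = nn.TransformerEncoder(inner_layer, num_layers=2)   # per observation
        self.outer = nn.TransformerEncoder(outer_layer, num_layers=2)   # over the history
        self.policy_head = nn.Linear(d_model, n_actions)

    def forward(self, obs_history):
        # obs_history: (batch, history_len, tokens_per_obs, token_dim)
        b, t, n, _ = obs_history.shape
        tokens = self.embed(obs_history).reshape(b * t, n, -1)
        obs_emb = self.inner(tokens).mean(dim=1)          # one vector per observation
        obs_seq = obs_emb.reshape(b, t, -1)
        history_emb = self.outer(obs_seq).mean(dim=1)     # summary of the history
        return self.policy_head(history_emb)              # e.g., action logits

# A batch of 2 histories, each with 8 observations split into 16 tokens of dim 32.
print(TIT()(torch.randn(2, 8, 16, 32)).shape)  # (2, 4)
```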
Recently, deep learning has shown its advantages in representation learning and clustering for time series data. Despite the considerable progress, the existing deep time series clustering approaches mostly seek to train the deep neural network with some instance-reconstruction-based or cluster-distribution-based objective, which, however, lack the ability to exploit the sample-wise (or augmentation-wise) contrastive information or even the higher-level (e.g., cluster-level) contrastiveness for learning discriminative and clustering-friendly representations. In light of this, this paper presents a deep temporal contrastive clustering (DTCC) approach, which, for the first time to our knowledge, incorporates the contrastive learning paradigm into deep time series clustering research. Specifically, with two parallel views generated from the original time series and their augmentations, we utilize two identical auto-encoders to learn the corresponding representations, and in the meantime perform cluster distribution learning by incorporating a k-means objective. Further, two levels of contrastive learning are simultaneously enforced to capture the instance-level and cluster-level contrastive information, respectively. With the reconstruction loss of the auto-encoder, the cluster distribution loss, and the two levels of contrastive losses jointly optimized, the network architecture is trained in a self-supervised manner and the clustering result can thereby be obtained. Experiments on a variety of time series datasets demonstrate the superiority of our DTCC approach over the state-of-the-art.
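For concreteness, below is a compact sketch of a standard instance-level contrastive term of the kind the abstract refers to (NT-Xent style): representations of a series and its augmentation form a positive pair, and all other samples in the batch serve as negatives. The cluster-level term would be defined analogously over cluster-assignment columns; the temperature and normalization here are illustrative choices, not necessarily DTCC's exact loss.

```python
# NT-Xent-style instance-level contrastive loss between two views of a batch.
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (B, D) representations of the two views of the same batch."""
    b = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2B, D), unit norm
    sim = z @ z.t() / temperature                          # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                      # drop self-similarity
    # The positive for sample i is its counterpart in the other view.
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)])
    return F.cross_entropy(sim, targets)

z_view1 = torch.randn(8, 32)
z_view2 = torch.randn(8, 32)
print(instance_contrastive_loss(z_view1, z_view2))
```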